Psychological Test and Assessment Modeling; 65(1): 55-75, 2023.
Article in English | ProQuest Central | ID: covidwho-2306670

ABSTRACT

Keywords: automated distractor generation, automated item generation, natural language processing, deep learning language models, prompt-based learning

Language testing programs, like many other educational and psychological testing programs, face increasing demands for flexible test administration. Since the COVID-19 pandemic, many language proficiency tests have been offered for at-home administration with more available testing dates. [...] von Davier (2018) trained a long short-term memory (LSTM)-based recurrent neural network model and Hommel et al. [...] transformer-based models achieved state-of-the-art performance on a wide range of NLP benchmark tasks, such as the General Language Understanding Evaluation (GLUE; Wang et al., 2019), the Stanford Question Answering Dataset (SQuAD; Rajpurkar et al., 2016), and the Situations with Adversarial Generations (SWAG; Zellers et al., 2018). A typical fine-tuning process consumes a large number of examples (often several tens of thousands), yet it is rare for a testing program to have such a large item pool. [...] we designed language prompts for distractors and leveraged the prompts during fine-tuning to address this small-sample challenge.
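
As a rough illustration of the prompt-based fine-tuning idea the abstract describes (not the authors' implementation), the Python sketch below fine-tunes a small Hugging Face seq2seq model on a few prompt/distractor pairs and then reuses the same prompt template at generation time. The model name, prompt wording, training pairs, and hyperparameters are all illustrative assumptions.

# Minimal sketch of prompt-based fine-tuning for distractor generation,
# assuming a Hugging Face seq2seq model (T5). Prompts and data are toy examples.
from transformers import T5ForConditionalGeneration, T5TokenizerFast
import torch

MODEL_NAME = "t5-small"  # assumption: any seq2seq LM could be substituted
tokenizer = T5TokenizerFast.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

# Hypothetical training pairs: the prompt wraps the item stem and the key so
# the model learns to emit a plausible-but-wrong option (the distractor).
train_pairs = [
    ("generate distractor: stem: She ___ to the store yesterday. key: went",
     "goes"),
    ("generate distractor: stem: The results ___ surprising. key: were",
     "was"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for epoch in range(3):  # few epochs; the prompt carries much of the task signal
    for source, target in train_pairs:
        inputs = tokenizer(source, return_tensors="pt")
        labels = tokenizer(target, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference: apply the same prompt template to a new item and sample candidates.
model.eval()
prompt = "generate distractor: stem: He ___ finished his homework. key: has"
ids = tokenizer(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=5, num_return_sequences=3, do_sample=True)
print(tokenizer.batch_decode(out, skip_special_tokens=True))

The point of the sketch is the small-sample setup: because the prompt states the task in natural language, the pretrained model needs only a handful of labeled pairs rather than the tens of thousands a conventional fine-tuning run would consume.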
